How Averages Distort Reality For Media Buyers

Audiences don’t experience content as a seven-day average. They experience it in moments.

Every quarter, audience reports arrive on agency desks. Rankings. Charts. A clear number one. And every quarter, the same ritual unfolds: agencies read the summary, accept the ranking, and recommend media plans based on a single averaged number.
No questions asked. No patterns examined. No temporal analysis, by which I mean no look at how the numbers move over time rather than as one static, averaged figure. Just blind acceptance of the weekly average as gospel truth.
We’ve created an industry of data illiterates. Agencies with access to mountains of data but operating with hunter-gatherer cognitive patterns. Our brains evolved for immediate threat detection and quick pattern recognition, not strategic analysis. React fast to the rustling in the bushes. Spot the simple pattern. Make the quick decision. Survive the day. This served us well for 200,000 years.
It serves us terribly now.
Modern business demands exactly what our brains weren’t built for: resisting immediate conclusions, questioning obvious patterns, sitting with complexity, planning beyond the next quarter. But that cognitive wiring remains. Agencies see a number, their brain says “pattern recognized,” and they move on. No further analysis needed. The primitive instinct satisfied.
Clients are no different. They’ve inherited the same hunter-gatherer operating system. They need quick screenshots. Easy-to-digest summaries. Numbers they can drop into a presentation without thinking. Data that triggers the pattern-recognition reward, not the uncomfortable work of deep analysis. They want conclusions, not questions. They want to react, not strategize.
I encounter so many clients who are completely ill-equipped to interrogate the data they’re handed. Not because they lack intelligence, but because strategic analysis fights against every instinct evolution gave them. Their brains are screaming “you have the answer, move on to the next threat.” Sitting with ambiguity, questioning the obvious, demanding nuance when a clear ranking exists feels cognitively expensive. Unnecessary. Dangerous, even.
Take something as simple as "creative refresh". I've shown clients data from Kantar proving that constantly changing creative resets effectiveness. Each new execution starts the memory curve over again. Familiarity, not novelty, is what builds mental availability. But a few days later, the same conversation returns: "We need more assets. Something fresh." The evidence never stands a chance against instinct. Novelty feels like action. Consistency feels like complacency.
The problem isn’t the data. It’s that nobody in the chain can override the evolutionary programming that says “simple pattern detected, decision made, move on.” Not the agencies. Not the clients. Not the decision-makers who ultimately allocate millions based on these numbers.
We’re running a modern data-driven industry with Stone Age brains. And the weekly average is perfectly designed to exploit that mismatch.
Average a company's salaries, CEO compensation included, and you can claim everyone earns $200,000 a year. Technically true. Completely useless. The receptionist making $35,000 and the CEO making $3 million have nothing in common except an employer, yet the average pretends they occupy the same reality.
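To see how far a mean can drift from anyone's lived reality, here is a minimal sketch in Python. The payroll figures are invented, chosen only to echo the example above:

```python
# Minimal sketch with invented payroll figures: one outlier salary drags the
# mean far away from what most employees actually earn.
from statistics import mean, median

salaries = [35_000] * 5 + [60_000] * 8 + [90_000] * 2 + [3_000_000]  # hypothetical

print(f"mean:   ${mean(salaries):,.0f}")    # ~$240,000 -- inflated by the CEO
print(f"median: ${median(salaries):,.0f}")  # $60,000 -- what a typical employee earns
```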
This is what happens every quarter with media ratings. Mathematical reality and lived experience have diverged so completely that you can be losing every single day and still claim victory by week’s end. And agencies, armed with research reports they never interrogate, will congratulate you for it.
When the Leaderboard Lies
Television ratings arrive with neat weekly rankings. A single leaderboard. One winner. But content consumption doesn’t work on a seven-day cycle. People watch in slots. Weekday routines. Weekend rituals. Category preferences that shift with context.
Think about how this plays out in media. One channel dominates Monday through Friday with drama programming that builds daily viewing habits. Another channel largely disappears during the work week but explodes on weekends with entertainment that pulls twice the weekday numbers.
Average those seven days together, and the weekend powerhouse floats to the top of the leaderboard. The report is accurate. The ranking is real. And it’s completely misleading about who owns the week.
This is like counting cars on a road over 24 hours and declaring there’s never any traffic because the average looks calm. The morning gridlock and the midnight empty road become a steady flow that exists nowhere in reality.
Why Research Gets This Wrong
Research companies approach media measurement the same way they approach fast-moving consumer goods. They aggregate purchase occasions, average consumption patterns, and report back tidy weekly summaries. It’s the FMCG playbook applied to a fundamentally different product.
But toothpaste doesn’t care what day of the week it is. Television does.
When you measure soap powder, temporal patterns matter less: time-based variation doesn't significantly change the outcome being measured. In media consumption, temporal patterns are everything. A channel that delivers 2.0 ratings Monday through Friday and 9.0 on weekends is not the same as a channel delivering 4.0 every day, even though both average exactly 4.0 for the week.
The first pattern suggests weekend activation potential. The second suggests consistent reach for habit formation. These are different strategic assets serving different objectives, yet the weekly average treats them as identical.
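A minimal sketch makes the point concrete. The ratings below are the hypothetical figures from the example above, not real channel data:

```python
# The two hypothetical channels from above: identical weekly averages,
# completely different weekly shapes.
from statistics import mean

spiky  = [2.0] * 5 + [9.0] * 2   # 2.0 Mon-Fri, 9.0 Sat-Sun
steady = [4.0] * 7               # 4.0 every day

for name, week in [("spiky", spiky), ("steady", steady)]:
    avg = mean(week)
    print(f"{name}: weekly avg {avg:.1f}, peak-to-mean {max(week) / avg:.2f}")

# spiky:  weekly avg 4.0, peak-to-mean 2.25
# steady: weekly avg 4.0, peak-to-mean 1.00
```

Same weekly average, radically different peak-to-mean ratio. The number agencies quote hides the only number that distinguishes the two.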
Research methodologies built for pantry products miss the entire point of time-based media consumption. And agencies, rather than questioning this mismatch, simply accept whatever the research company publishes each quarter. Clients, hungry for quick insights and simple narratives, reward this laziness by demanding nothing more than a top-line summary they can present to their bosses.
The Pattern Repeats Globally
Let me share an example from America that illustrates this perfectly. College football programming lives almost entirely on Saturdays. During the 2024 season, games like Georgia-Tennessee pulled 12.6 million viewers on ABC, and Texas-Ohio State drew 16.6 million on Fox. These are massive Saturday audiences.
Now imagine if someone averaged those college football numbers across all seven days of the week. You’d conclude the sport has modest daily reach. You’d completely miss the strategic reality: college football’s entire value is concentrated in weekend windows where it competes directly with professional sports for the largest audiences of the week.
An agency looking only at the weekly average would undervalue Saturday completely and misunderstand where the audience actually concentrates. Yet this is exactly what happens when weekly rankings become the only lens through which media is evaluated.
The same pattern shows up in cable news. Fox News in the US explicitly separates its weekday and weekend programming because the audiences behave fundamentally differently.
Different days. Different viewers. Different behaviors. One average that obscures all of it.
A Statistical Truth Across Fields
This isn’t unique to television. Simpson’s Paradox, a statistical phenomenon documented since the 1950s, shows how aggregation distorts truth across every field.
The classic example comes from UC Berkeley in 1973. Aggregated admissions data suggested gender bias against women, but when researchers examined individual departments, the bias disappeared and in some cases reversed. Within most departments, women were accepted at rates as high as or higher than men's; they had simply applied in greater numbers to the more competitive programs with lower overall acceptance rates.
The aggregated truth was the opposite of the departmental truth.
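The reversal is easy to reproduce. Here is a minimal sketch with two invented departments, built only to mirror the Berkeley pattern:

```python
# Two invented departments mirroring the Berkeley pattern. Women are admitted
# at a HIGHER rate in both departments, yet at a LOWER rate overall, because
# they applied mostly to the more competitive department.
#          department: (applicants, admitted)
men   = {"easy": (800, 480), "hard": (200, 20)}    # 60% easy, 10% hard
women = {"easy": (200, 130), "hard": (800, 120)}   # 65% easy, 15% hard

def overall_rate(group):
    applied  = sum(a for a, _ in group.values())
    admitted = sum(d for _, d in group.values())
    return admitted / applied

print(f"men overall:   {overall_rate(men):.0%}")    # 50%
print(f"women overall: {overall_rate(women):.0%}")  # 25%
```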
In 2020, during COVID-19, Italy showed a higher overall case fatality rate than China. But when stratified by age group, China’s fatality rate was higher in every single age bracket. The paradox arose because Italy’s population was older, skewing the aggregate numbers in ways that hid the underlying reality.
And it shows up in consumer behavior constantly. Research on retail shopping patterns consistently finds that consumers shop completely differently on weekdays versus weekends, with clear preferences for shopping on days close to the weekend. Another study tracking online grocery orders found clear differences between weekdays and weekends in purchasing patterns.
Yet how do most businesses analyze their performance? Weekly averages. Monthly totals. Quarterly aggregates that blend these fundamentally different behaviors into numbers nobody actually experiences.
The Intra-Channel Penalty
The averaging problem doesn’t stop at comparing channels. It punishes individual producers and programming blocks within the same channel.
A producer creates a hit drama that dominates its 8pm weekday slot with a 6.0 rating. Strong performance. Clear audience engagement. But the channel also broadcasts other programs that drag down the overall number, pulling 1.0 here and 1.5 there. When you average the channel’s entire day, that 8pm hit gets buried in an overall channel average of 2.8.
Then the producer's performance review arrives. Instead of being celebrated for owning their time slot, they're told the channel is underperforming. Their success is penalized because someone else's failure dragged down the average. The hit show and the struggling programs have nothing in common except a broadcast license, yet the average pretends they're equally responsible for the channel's overall number.
This is the ultimate absurdity of averaging. Not only does it obscure which channels win which dayparts, it also hides which programs within a channel actually deliver. You can produce excellence and get punished for someone else’s mediocrity. You can fail spectacularly and get credit for someone else’s hit.
Intra-channel averaging makes performance evaluation impossible. It rewards the wrong people and demoralizes the right ones.
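The arithmetic of the penalty is trivial to expose. A minimal sketch, using the hypothetical program ratings from above:

```python
# The hypothetical program numbers from above: the 8pm hit vanishes into the
# channel-wide mean that ends up in the quarterly report.
from statistics import mean

programs = {"8pm drama": 6.0, "morning show": 1.5, "afternoon block": 1.0}

print(f"channel average: {mean(programs.values()):.1f}")  # 2.8 -- what gets reported
for show, rating in programs.items():
    print(f"  {show}: {rating}")                          # what actually happened
```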
Why Category Makes It Worse
Now layer in category dynamics, and the distortion multiplies even further.
Think about Coca-Cola versus PepsiCo. If you average total company portfolios, PepsiCo looks twice as big because Frito-Lay carries enormous weight. But in the beverages category? Coca-Cola still leads. Category leadership and portfolio averages are entirely different conversations.
So are weekend entertainment dominance and weekday drama habits.
One channel might own weekday habit formation for brand building while another captures weekend activation for immediate response. Both are valuable. Both serve different objectives. But a single ranking treats them as if they're competing for the same job.
The Myth of Exclusive Audiences
The averaging problem gets worse when you consider audience overlap. No channel owns 100% of its viewers. Research into brand duplication, documented extensively in marketing science, shows that brands share customers with their competitors in proportion to each competitor's market share. This is called the Duplication of Purchase Law.
A larger brand might reach 50% of the market and a smaller brand 35%. You might assume each brand's customers belong to it exclusively. But that's not how it works. Roughly 35% of the larger brand's customers also buy the smaller brand, and roughly 50% of the smaller brand's customers also buy the larger one, each figure tracking the other brand's penetration.
Consumers are predictably polygamous. They don’t pledge allegiance to one brand. They rotate through a repertoire.
The same applies to media. Channel A viewers watch Channel B. Channel B viewers watch Channel A. The overlap is substantial and predictable. Yet weekly rankings treat audiences as if they’re mutually exclusive tribes. They’re not. They’re the same people moving between options based on day, time, category, and mood.
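The law itself is simple enough to sketch. Below, the penetration figures are the hypothetical ones from above, and the duplication coefficient D is a placeholder set to 1.0 for illustration; in practice it is fitted from panel data:

```python
# Duplication of Purchase Law, sketched: the share of brand A's buyers who
# also buy brand B is roughly D times B's penetration. D is a duplication
# coefficient fitted from panel data in practice; 1.0 here for illustration.
penetration = {"larger brand": 0.50, "smaller brand": 0.35}  # hypothetical
D = 1.0

for a in penetration:
    for b in penetration:
        if a != b:
            print(f"{D * penetration[b]:.0%} of the {a}'s buyers also buy the {b}")

# 35% of the larger brand's buyers also buy the smaller brand
# 50% of the smaller brand's buyers also buy the larger brand
```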
This means a channel “winning” the week might be winning with borrowed audiences who spend most of their time elsewhere. And a channel ranking second might actually own deeper weekday habits with audiences who occasionally defect to weekend spectacle.
The weekly average obscures not just temporal patterns but also the fluid, overlapping nature of actual viewing behavior. Agencies, reading only the top-line ranking each quarter, miss all of it. And clients, presented with a clean summary slide, never ask for anything deeper.
What the Average Hides
When you blend seven days into one number, you lose everything that matters for decision-making:
Who owns weekday habit formation. The repeated exposure that builds mental availability.
Who owns weekend spike potential. The high-impact moments that drive short-term activation.
Which category carries which slot. Drama at 8pm weekdays versus variety at 8pm weekends are different universes.
Which programs within a channel actually perform. A hit show at 8pm gets penalized by weak morning programming. A failing afternoon block gets credit for someone else’s primetime success.
Peak-to-mean volatility. A channel with a 2.0 rating Monday through Friday and 9.0 on weekends looks identical to a channel with 4.0 every day (both average exactly 4.0 weekly). But the first is feast-or-famine while the second is consistent reach.
How your specific objective aligns with these patterns. Are you building reach? Driving frequency? Targeting a category? Timing to consideration?
The degree of audience overlap. Which channel actually owns viewer loyalty versus which one just borrows audiences from competitors on certain days.
The average smooths out everything you actually need to see. But agencies, content with surface-level summaries, never dig deeper to find it. And clients, rewarding simplicity over substance, never demand that they do.
What to Measure Instead
This isn’t about making measurement more complicated. It’s about making it honest.
Measure slot share by daypart and day of week. Not weekly totals. Show the pattern, not the average.
Report program-level performance, not channel averages. A producer’s hit show at 8pm should be evaluated on its own merit, not dragged down by weak programming in other dayparts.
Calculate peak-to-mean ratios. Expose the volatility. A 5.0 average built from a consistent 4-to-6 range is different from a 5.0 built from 2s and 12s.
Report daypart-weighted reach for stated objectives. If the goal is weekday frequency for habit formation, weight those days. If it’s weekend activation, weight those.
Measure audience duplication across channels and days. Show which viewers are exclusive versus shared. Reveal the overlap that weekly averages pretend doesn’t exist.
Create a simple index showing who leads where and when. Replace one deceptive rank with a clear map of situational leadership at both channel and program level.
None of this is harder than what we already do. It’s just honest about what the numbers actually mean.
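As a sketch of how little this takes, here is a toy situational leadership map and a daypart-weighted score. All ratings and weights are invented for illustration:

```python
# A toy situational leadership map plus a daypart-weighted score.
# All ratings and weights are invented for illustration.
ratings = {
    "Channel A": {"weekday 8pm": 6.0, "weekend 8pm": 3.0},
    "Channel B": {"weekday 8pm": 2.5, "weekend 8pm": 9.0},
}

# Who leads where and when -- one line per slot instead of one rank per week.
for slot in next(iter(ratings.values())):
    leader = max(ratings, key=lambda ch: ratings[ch][slot])
    print(f"{slot}: {leader} leads at {ratings[leader][slot]}")

# Daypart-weighted score for a stated objective (here: weekday habit building).
weights = {"weekday 8pm": 0.8, "weekend 8pm": 0.2}  # assumed objective weighting
for channel, slots in ratings.items():
    score = sum(weights[s] * r for s, r in slots.items())
    print(f"{channel} weekday-objective score: {score:.1f}")
```

One map replaces one rank: Channel A leads the weekday slot, Channel B the weekend slot, and the weighted score changes depending on the objective you declare.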
The Real-Time Data Problem
The ultimate solution to averaging’s distortions would be real-time, granular data that shows exactly who’s watching what, when. No weekly summaries. No aggregated reports. Just immediate visibility into audience behavior as it happens.
But traditional broadcast has a fundamental problem: the technology doesn’t allow it.
Satellite transmission and linear over-the-air broadcast don’t generate digital data streams that can be captured in real time. The signal goes out. Someone watches. But there’s no return path to collect viewing data at the moment it happens. Even Nielsen, the gold standard in television measurement, faces this challenge.
Nielsen’s traditional measurement relied on panels (households keeping diaries or using people meters) that reported viewing after the fact. While Nielsen has now integrated “big data” from cable set-top boxes and smart TV automatic content recognition, these sources have critical gaps. Over-the-air broadcast, which still accounts for about 18% of U.S. TV households, generates no digital footprint at all. Set-top boxes can exaggerate viewing by 145% to 260% because they often remain on when the TV is off. Smart TV data monitors only about 31% of available stations and gets blocked by some streaming apps.
More importantly, none of this is truly real-time. The data still requires processing, matching to broadcast schedules, calibration with panel measurements, and quality checks before it becomes usable. By the time the numbers arrive, you’re looking backward at what already happened.
Digital platforms and streaming services solved this problem years ago. They know exactly what you’re watching, second by second, because the content delivery itself is digital and bidirectional. Traditional broadcasters don’t have that luxury. The infrastructure wasn’t built for it.
This is why quarterly reports with weekly averages persist. Not because they’re good. Because the underlying technology of linear broadcast makes real-time, granular measurement extremely difficult. We’re averaging because we can’t see clearly enough to do anything else.
But that doesn’t make the averages useful. It just explains why they’re still here.
The Strategic Implication
If you chase the weekly average, you optimize for a pattern nobody experiences. You buy into weekend spikes when you need weekday consistency, or invest in weekday reach when your category converts on Saturdays. You treat audience overlap as if it doesn’t exist and plan as if viewers pledge exclusive loyalty.
Both patterns are legitimate. Both serve real business needs. But neither is universally superior. The question isn’t “who wins the week” but “who wins the moment that matters for my objective, with audiences who actually show up consistently versus those who just pass through.”
Leadership in media, and in most markets, is situational. It happens in hours, not weeks. In categories, not portfolios. With audiences who overlap and rotate, not tribes who remain faithful. In the context that matters for the job you’re trying to do.
That weekend-dominant channel? It legitimately leads on Saturdays and Sundays in entertainment programming. That’s real. That’s valuable. That might be exactly what weekend activation advertisers need. But those audiences are likely shared with weekday channels, making the “dominance” more complex than the ranking suggests.
That weekday-strong channel? It legitimately builds habit through consistent Monday-to-Friday exposure. That’s real. That’s valuable. That might be exactly what brand-building advertisers need. But again, these audiences aren’t exclusive property.
Neither is wrong. The weekly average just obscures which is right for you, and with whom you’re actually competing for attention.
Stop Averaging Your Reality Away
The comfort of a single number is seductive. One winner. Clean. Simple. Easy to report up the chain.
It’s also often wrong.
Every time we aggregate across meaningful differences (time, category, audience overlap, objective) we create a new number that represents nobody’s actual experience. We manufacture a center that doesn’t exist and mistake it for truth.
Just like that company salary average that includes the CEO’s millions, the weekly media average creates a fiction that serves no one except those too lazy to dig deeper.
The solution isn’t more data. It’s less averaging. And it requires agencies to actually interrogate the research they’re handed each quarter instead of accepting it uncritically. It requires clients to demand more than pretty screenshots and easy summaries.
Look at your business. Where are you blending rush hour with midnight? Where are you combining Monday and Saturday into “weekly”? Where are you treating fundamentally different behaviors as if they’re the same thing because they happen to share a time period? Where are you assuming exclusive audiences when viewers actually rotate between options?
The market looks completely different the moment you stop averaging it into submission. But you have to be willing to ask the hard questions that most agencies won’t. And clients have to be willing to accept that the easy answer is often the wrong one. This isn’t about having better data. We already have the data. This is about having the courage to actually look at it.
About Nazrawi Ghebreselasie
Nazrawi Ghebreselasie is the Co-Founder and CEO of Kana TV, Ethiopia’s leading television network known for bringing global entertainment and original local productions to millions of viewers. A mathematician and entrepreneur, Nazrawi also leads STRATIX, a strategy and creative consulting agency working at the intersection of media, technology, and culture across Africa. His work focuses on audience behavior, media innovation, and building data-driven creative ecosystems.